11 research outputs found

    Implementation of binary stochastic STDP learning using chalcogenide-based memristive devices

    Full text link
    The emergence of nanoscale memristive devices has encouraged many research areas to explore their use in a variety of applications. One proposed application is the implementation of synaptic connections in bio-inspired neuromorphic systems. Large-scale neuromorphic hardware platforms are being developed with ever more neurons and synapses, and online learning capability remains a critical bottleneck. Spike-timing-dependent plasticity (STDP) is a widely used, biologically inspired learning mechanism that updates a synaptic weight as a function of the temporal correlation between pre- and post-synaptic spikes. In this work, we demonstrate experimentally that binary stochastic STDP learning can be obtained from a memristor when appropriate pulses are applied to both terminals of the device.
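    As an illustrative aside (not code from the paper above), the pair-based STDP rule and its binary stochastic variant can be sketched as follows; the amplitudes, time constant, and switching-probability mapping are assumed values chosen only for illustration.

    # Sketch of pair-based STDP and a binary stochastic variant.
    # All constants (A_PLUS, A_MINUS, TAU) are assumptions, not values
    # taken from the paper listed above.
    import math
    import random

    A_PLUS, A_MINUS = 0.05, 0.055   # potentiation / depression amplitudes (assumed)
    TAU = 20e-3                     # STDP time constant in seconds (assumed)

    def stdp_delta_w(t_pre: float, t_post: float) -> float:
        """Analog pair-based STDP: weight change for one pre/post spike pair."""
        dt = t_post - t_pre
        if dt > 0:      # pre before post -> potentiation
            return A_PLUS * math.exp(-dt / TAU)
        else:           # post before (or with) pre -> depression
            return -A_MINUS * math.exp(dt / TAU)

    def binary_stochastic_stdp(weight: int, t_pre: float, t_post: float) -> int:
        """Binary synapse (0 or 1): the spike-timing difference sets the
        probability of switching, mimicking a stochastic memristive
        SET/RESET event."""
        dt = t_post - t_pre
        p_switch = min(1.0, abs(stdp_delta_w(t_pre, t_post)) / A_PLUS)
        if dt > 0 and weight == 0 and random.random() < p_switch:
            return 1    # probabilistic potentiation (SET)
        if dt <= 0 and weight == 1 and random.random() < p_switch:
            return 0    # probabilistic depression (RESET)
        return weight

    # Example: a pre-spike at t=0 followed by a post-spike 5 ms later tends
    # to potentiate a binary synapse that is currently 0.
    w = binary_stochastic_stdp(0, t_pre=0.0, t_post=5e-3)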

    Fast vision through frameless event-based sensing and convolutional processing: Application to texture recognition

    Get PDF
    Address-event representation (AER) is an emerging hardware technology with high potential to provide, in the near future, a solid technological substrate for emulating brain-like processing structures. When used for vision, AER sensors and processors are not restricted to capturing and processing still image frames, as in commercial frame-based video technology, but sense and process visual information in a pixel-level, event-based, frameless manner. As a result, vision processing is practically simultaneous with vision sensing, since there is no need to wait for full frames to be sensed. Also, only meaningful information is sensed, communicated, and processed. Of special interest for brain-like vision processing are some already reported AER convolution chips, which have revealed a very high computational throughput as well as the possibility of assembling large convolutional neural networks in a modular fashion. It is expected that in the near future we may witness the appearance of large-scale convolutional neural networks with hundreds or thousands of individual modules. In the meantime, some research is needed to investigate how to assemble and configure such large-scale convolutional networks for specific applications. In this paper, we analyze AER spiking convolutional neural networks for texture recognition hardware applications. Based on the performance figures of already available individual AER convolution chips, we emulate large-scale networks using a custom-made event-based behavioral simulator. We have developed a new event-based processing architecture that emulates with AER hardware Manjunath's frame-based feature recognition software algorithm, and have analyzed its performance using our behavioral simulator. Recognition rate performance is not degraded. However, regarding speed, we show that recognition can be achieved before an equivalent frame is fully sensed and transmitted. This work was supported in part by the Spanish Ministry of Education and Science under Grant TEC-2006-11730-C03-01 (SAMANTA2), by the Andalusian regional government under Grant P06-TIC-01417 (Brain System), and by the European Union (EU) Grants IST-2001-34124 (CAVIAR) and 216777 (NABAB). The work of J. A. Pérez-Carrasco was supported by a doctoral scholarship as part of research project Brain System. Peer reviewed.
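    As a purely hypothetical illustration (not code from this project), the frameless event-driven convolution described above can be pictured like this: each incoming address event immediately updates an integrate-and-fire feature map under the kernel, so output events can appear long before a full frame's worth of input has arrived. The kernel, threshold, and array sizes below are assumptions.

    # Sketch of event-driven (frameless) convolution in the spirit of AER
    # convolution chips: every input event updates the membrane state of the
    # pixels under the kernel, and output events are emitted as soon as a
    # neuron crosses threshold. Sizes and threshold are illustrative.
    import numpy as np

    H, W = 64, 64
    KERNEL = np.ones((5, 5)) / 25.0          # assumed smoothing kernel
    THRESHOLD = 1.0                          # assumed firing threshold
    membrane = np.zeros((H, W))              # integrate-and-fire state

    def process_event(x: int, y: int, polarity: int) -> list:
        """Project one input address event through the kernel and return the
        addresses of any output events generated."""
        out_events = []
        k = KERNEL.shape[0] // 2
        for dy in range(-k, k + 1):
            for dx in range(-k, k + 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy < H and 0 <= xx < W:
                    membrane[yy, xx] += polarity * KERNEL[dy + k, dx + k]
                    if membrane[yy, xx] >= THRESHOLD:
                        membrane[yy, xx] = 0.0          # reset after firing
                        out_events.append((xx, yy))     # emit output event
        return out_events

    # Feeding a stream of (x, y, polarity) events one by one; processing can
    # start with the very first events, with no frame boundary to wait for.
    for ev in [(10, 12, 1), (11, 12, 1), (10, 13, -1)]:
        process_event(*ev)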

    On scalable spiking convnet hardware for cortex-like visual sensory processing systems

    No full text
    This paper summarizes how Convolutional Neural Networks (ConvNets) can be implemented in hardware using spiking neural network Address-Event Representation (AER) technology for sophisticated pattern and object recognition tasks operating with millisecond delays. Although such hardware would require hundreds of individual convolutional modules and is therefore not yet available, we discuss methods and technologies for implementing it in the near future. In the meantime, we provide precise behavioral simulations of large-scale spiking AER convolutional hardware and evaluate its performance, using performance figures of already available AER convolution chips fed with real sensory data obtained from physically available AER motion retina chips. We provide simulation results of systems trained for people recognition, showing recognition delays of a few milliseconds from stimulus onset. ConvNets show good scaling behavior and can potentially be implemented efficiently with new nanoscale hybrid CMOS/non-CMOS technologies. Peer reviewed.
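    As an illustrative sketch (not the simulator used in this work), the kind of behavioral event-driven simulation mentioned above can be organized around a global time-ordered event queue: each module consumes input events and, using per-chip latency figures, schedules its output events at a later timestamp. The module topology and the latency value here are assumptions.

    # Minimal event-driven behavioral simulator sketch: events carry
    # (timestamp, layer, address); each layer re-emits events to the next
    # layer after a fixed per-event latency taken from chip performance
    # figures. The 200 ns latency and two-layer topology are assumptions.
    import heapq

    LAYER_LATENCY = 200e-9        # assumed per-event processing latency (s)
    NUM_LAYERS = 2

    def simulate(input_events):
        """input_events: iterable of (timestamp, address) tuples from a retina."""
        queue = [(t, 0, addr) for t, addr in input_events]   # layer 0 = sensor
        heapq.heapify(queue)
        outputs = []
        while queue:
            t, layer, addr = heapq.heappop(queue)
            if layer == NUM_LAYERS:
                outputs.append((t, addr))                    # final-layer spikes
                continue
            # A real module would apply its convolution here; behaviorally we
            # only model the delay before the event appears downstream.
            heapq.heappush(queue, (t + LAYER_LATENCY, layer + 1, addr))
        return outputs

    # Recognition delay is simply the timestamp of the first output event
    # relative to stimulus onset, typically well before a frame time elapses.
    spikes = simulate([(0.0, (10, 20)), (1e-6, (11, 20))])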

    Neocortical frame-free vision sensing and processing through scalable Spiking ConvNet hardware

    No full text
    This paper summarizes how Convolutional Neural Networks (ConvNets) can be implemented in hardware using spiking neural network Address-Event Representation (AER) technology for sophisticated pattern and object recognition tasks operating with millisecond delays. Although such hardware would require hundreds of individual convolutional modules and is therefore not yet available, we discuss methods and technologies for implementing it in the near future. In the meantime, we provide precise behavioral simulations of large-scale spiking AER convolutional hardware and evaluate its performance, using performance figures of already available AER convolution chips fed with real sensory data obtained from physically available AER motion retina chips. We provide simulation results of systems trained for people recognition, showing recognition delays of a few milliseconds from stimulus onset. ConvNets show good scaling behavior and can potentially be implemented efficiently with new nanoscale hybrid CMOS/non-CMOS technologies. Peer reviewed.

    CAVIAR: a 45k neuron, 5M synapse, 12G connects/s AER hardware sensory-processing-learning-actuating system for high-speed visual object recognition and tracking

    Get PDF
    This paper describes CAVIAR, a massively parallel hardware implementation of a spike-based sensing-processing-learning-actuating system inspired by the physiology of the nervous system. CAVIAR uses the asynchronous address-event representation (AER) communication framework and was developed in the context of a European Union funded project. It comprises four custom mixed-signal AER chips and five custom digital AER interface components, implements 45k neurons (spiking cells) and up to 5M synapses, performs 12G synaptic operations per second, and achieves millisecond object recognition and tracking latencies.
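    As an illustrative aside (not CAVIAR's actual interface specification), the address-event representation communication named above can be pictured as a stream of small packets, each carrying only the address of the neuron that spiked; mapper stages between chips then rewrite or combine those addresses. The field widths and the mapping below are assumptions.

    # Toy address-event representation (AER) sketch: each spike travels as the
    # address of the firing neuron; a mapper between chips rewrites addresses
    # so a sensor's output can drive a processor's input. Field widths and the
    # mapping are illustrative, not CAVIAR's actual interface definition.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AddressEvent:
        timestamp_us: int   # when the spike occurred
        x: int              # column address of the firing neuron/pixel
        y: int              # row address of the firing neuron/pixel

    def encode(ev: AddressEvent, width_bits: int = 7) -> int:
        """Pack (y, x) into a single bus word, as an AER sender would."""
        return (ev.y << width_bits) | ev.x

    def remap(word: int, offset_x: int, width_bits: int = 7) -> int:
        """Simple mapper stage: shift events horizontally before the next chip."""
        x = word & ((1 << width_bits) - 1)
        y = word >> width_bits
        return (y << width_bits) | ((x + offset_x) % (1 << width_bits))

    # A retina spike at (x=12, y=34) is encoded, remapped, and forwarded on.
    word = encode(AddressEvent(timestamp_us=5, x=12, y=34))
    forwarded = remap(word, offset_x=8)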

    Nano functional neural interfaces

    No full text